Reinvestigating the R2 Indicator: Achieving Pareto Compliance by Integration
Schäpermeier, Lennart, Kerschke, Pascal
In multi-objective optimization, set-based quality indicators are a cornerstone of benchmarking and performance assessment. They capture the quality of a set of trade-off solutions by reducing it to a scalar number. One of the most commonly used set-based metrics is the R2 indicator, which describes the expected utility of a solution set to a decision-maker under a distribution of utility functions. Typically, this indicator is applied by discretizing the distribution of utility functions, yielding a weakly Pareto-compliant indicator. As a consequence, adding a nondominated or dominating solution to a solution set may - but does not have to - improve the indicator's value. In this paper, we reinvestigate the R2 indicator under the premise of a continuous, uniform distribution of (Tchebycheff) utility functions. We analyze its properties in detail, demonstrating that this continuous variant is indeed Pareto-compliant - that is, any beneficial solution will improve the metric's value. Additionally, we provide an efficient procedure for computing this metric for bi-objective problems in $\mathcal{O}(N \log N)$. As a result, this work adds to the state-of-the-art Pareto-compliant unary performance metrics, such as the hypervolume indicator, offering an efficient and promising alternative.
- Europe > Germany > Saxony > Leipzig (0.04)
- Europe > United Kingdom > England > South Yorkshire > Sheffield (0.04)
- Europe > Portugal (0.04)
- (3 more...)
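The weakly Pareto-compliant, discretized R2 variant that the abstract contrasts against can be sketched in a few lines (a minimal sketch, not the paper's implementation; function and variable names, the minimization convention, and the ideal point `ideal` are my own choices):

```python
import numpy as np

def r2_indicator(A, W, ideal):
    """Discretized R2 indicator of a solution set A (minimization).

    A: (n, m) array of objective vectors, W: (k, m) weight vectors,
    ideal: (m,) ideal point. Lower values are better.
    """
    A = np.asarray(A, dtype=float)
    W = np.asarray(W, dtype=float)
    ideal = np.asarray(ideal, dtype=float)
    # Tchebycheff utility of every point under every weight vector:
    # u_w(a) = max_j w_j * |a_j - ideal_j|
    tcheby = np.max(W[:, None, :] * np.abs(A[None, :, :] - ideal), axis=2)
    # For each weight, take the best (smallest) utility achieved by the
    # set, then average uniformly over the discretized weight set.
    return float(np.mean(np.min(tcheby, axis=1)))
```

Adding a dominating solution can only decrease (improve) this value, but as the abstract notes, adding a nondominated point may leave it unchanged if no weight vector in `W` prefers that point.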
Correlating Food Safety Data
Let's make a first attempt at a correlation matrix: first by creating our complete dataset, performing some minor preprocessing on the date column (splitting it into year and month), loading it into a dataframe, and letting Python take care of the rest. The complete code can be found at the end of this post. Time for a first attempt at producing the correlation matrix. Ok, not bad for a first attempt, but it seems there is a long way ahead. Let's add some more data into the mix!
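The steps above can be sketched as follows with pandas (the column names and values here are invented placeholders, not the post's actual food safety schema):

```python
import pandas as pd

# Hypothetical food-safety data; the post assembles its dataset from real sources.
df = pd.DataFrame({
    "date": pd.to_datetime(["2020-01-15", "2020-06-01", "2021-03-20"]),
    "inspections": [10, 25, 40],
    "violations": [2, 5, 9],
})

# Minor preprocessing: split the date column into year and month.
df["year"] = df["date"].dt.year
df["month"] = df["date"].dt.month

# Pairwise Pearson correlations over the numeric columns.
corr = df.drop(columns="date").corr()
```

`DataFrame.corr()` silently ignores non-numeric columns, but dropping the raw `date` column first makes the intent explicit.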
Estimating regression errors without ground truth values
Tiittanen, Henri, Oikarinen, Emilia, Henelius, Andreas, Puolamäki, Kai
Regression analysis is a standard supervised machine learning method used to model an outcome variable in terms of a set of predictor variables. In most real-world applications, we do not know the true value of the outcome variable being predicted outside the training data, i.e., the ground truth is unknown. It is hence not straightforward to directly observe when a model's estimate is potentially wrong, due to phenomena such as overfitting and concept drift. In this paper, we present an efficient framework for estimating the generalization error of regression functions, applicable to any family of regression functions when the ground truth is unknown. We present a theoretical derivation of the framework and empirically evaluate its strengths and limitations. We find that it performs robustly and is useful for detecting concept drift in several real-world datasets.
- North America > United States (0.28)
- Europe > Austria > Vienna (0.14)
- Europe > Finland > Uusimaa > Helsinki (0.04)
- Research Report > New Finding (0.48)
- Research Report > Experimental Study (0.48)
- Government > Regional Government (0.68)
- Transportation > Air (0.46)
- Consumer Products & Services > Travel (0.46)
Azure ML Feature Engineering – Convert to Indicator Values
Feature engineering is probably one of my favorite aspects of data science. This is the area where domain expertise and creativity can pay high dividends. Essentially, feature engineering allows us to come up with our own features or columns to make our models better. We can apply numerous tricks from a variety of tools provided by Azure ML Studio. Convert to Indicator Values is a module that transforms the values in a column into separate columns with binary values.
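The same transformation the module performs can be reproduced outside Azure ML Studio with pandas' `get_dummies` (a hedged sketch; the `city` column and its values are a made-up example, not data from the post):

```python
import pandas as pd

# Hypothetical categorical column; any real dataset would differ.
df = pd.DataFrame({"city": ["Leipzig", "Sheffield", "Leipzig"]})

# Each distinct value becomes its own binary (indicator) column.
indicators = pd.get_dummies(df["city"], prefix="city")

# Attach the indicator columns back to the original dataframe.
df = pd.concat([df, indicators], axis=1)
```

Like the Azure ML module, this produces one 0/1 column per distinct category value, which most models can consume directly.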